Search results: HUANG Hongcheng [Author], Journal of Computer Applications
Supernetwork link prediction method based on spatio-temporal relation in location-based social network
HU Min, CHEN Yuanhui, HUANG Hongcheng
Journal of Computer Applications, 2018, 38(6): 1682-1690. DOI: 10.11772/j.issn.1001-9081.2017122904
Existing link prediction methods for Location-Based Social Networks (LBSNs) achieve low accuracy because they fail to integrate social, location and time factors effectively. To address this problem, a supernetwork link prediction method based on spatio-temporal relations in LBSNs was proposed. Firstly, to handle the heterogeneity of the network and the spatio-temporal relations among users, the network was divided into a four-layer "spatio-temporal-user-location-category" supernetwork, reducing the coupling between the influencing factors. Secondly, considering the impact of edge weights on the network, the edge weights of the subnets were defined and quantified by mining user influence, implicit association relationships, user preferences and node-degree information, yielding a four-layer weighted supernetwork model. Finally, on the basis of this model, the super-edge and weighted super-edge structures were defined to mine the multivariate relationships among users for prediction. The experimental results show that, compared with link prediction methods based on homogeneous and heterogeneous networks, the proposed method improves accuracy, recall, F1-measure (F1) and Area Under the receiver operating characteristic Curve (AUC); its AUC is 4.69% higher than that of the heterogeneity-based link prediction method.
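The abstract describes the model only at a high level (layered subnets, quantified edge weights, super-edge-based scoring) and gives no formulas, so the following is a minimal illustrative sketch, not the paper's method: a multi-layer weighted network stored as per-layer adjacency dicts, with candidate user pairs scored by a simple weighted common-neighbour similarity summed over layers. All layer names, weights and the scoring rule are assumptions for illustration.

```python
# Hypothetical sketch of a multi-layer weighted network with a
# weighted common-neighbour link-prediction score (a simple stand-in
# for the paper's super-edge-based similarity; not its actual model).
from collections import defaultdict

class WeightedSupernetwork:
    """Four illustrative layers; each maps node -> {neighbour: weight}."""
    def __init__(self):
        self.layers = {name: defaultdict(dict)
                       for name in ("spatio-temporal", "user",
                                    "location", "category")}

    def add_edge(self, layer, u, v, w):
        # Undirected weighted edge within one layer.
        self.layers[layer][u][v] = w
        self.layers[layer][v][u] = w

    def score(self, u, v):
        # Sum, over all layers, the weights of edges to shared neighbours.
        total = 0.0
        for adj in self.layers.values():
            common = set(adj[u]) & set(adj[v])
            total += sum(adj[u][z] + adj[v][z] for z in common)
        return total

net = WeightedSupernetwork()
net.add_edge("user", "A", "C", 0.8)
net.add_edge("user", "B", "C", 0.6)
net.add_edge("location", "A", "C", 0.5)
net.add_edge("location", "B", "C", 0.9)

# A and B share neighbour C in two layers, so they score highly
# as a candidate link even though no direct edge exists.
print(net.score("A", "B"))
```

Higher-scoring node pairs would be predicted as future links; the real method additionally derives the edge weights from user influence, preference and node-degree information rather than taking them as given.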
Performance analysis of motor imagery training based on 3D visual guidance
HU Min, LI Chong, LU Rongrong, HUANG Hongcheng
Journal of Computer Applications, 2018, 38(3): 836-841. DOI: 10.11772/j.issn.1001-9081.2017082010
To improve the efficiency of visually guided Motor Imagery (MI) training and the classification accuracy of Brain-Computer Interfaces (BCIs), the influence of a Virtual Reality (VR) environment on MI training and the differences between ElectroEncephaloGram (EEG) classification models under different visual guidance were studied. Firstly, three kinds of 3D hand-interaction animation and an EEG acquisition program were designed. Secondly, in the rendering environments of a Helmet-Mounted Display (HMD) and a planar Liquid Crystal Display (LCD), left-hand and right-hand MI training was conducted on five healthy subjects, comprising a standard experiment (each session lasting 5 min) and a long-duration experiment (each session lasting 15 min). Finally, through pattern classification of the EEG data, the influence of the rendering environment and content form on classification accuracy was analyzed. The experimental results show a significant difference between HMD and LCD presentation in visually guided MI training: the VR environment presented by the HMD improves MI classification accuracy and allows longer single training sessions. In addition, the classification models differ across visual-guidance content; when the testing and training samples share the same visual-guidance content, the average classification accuracy is 16.34% higher than when they differ.
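The key comparison in the abstract is classification accuracy when test trials come from the same visual-guidance condition as the training trials versus a different one. The sketch below illustrates only that evaluation pattern on synthetic data: a toy one-dimensional "feature" per trial, a nearest-centroid classifier, and a condition mismatch modelled as a feature shift. The features, classifier and shift are invented stand-ins, not the paper's EEG pipeline.

```python
# Hypothetical sketch: matched- vs mismatched-condition evaluation.
# Synthetic stand-in data; not the paper's EEG features or classifier.
import random
from statistics import mean

random.seed(0)

def make_trials(offset, n=100):
    """Synthetic 1-D features for left/right MI trials.
    `offset` mimics a shift caused by different visual-guidance content."""
    left = [(random.gauss(-1.0 + offset, 0.5), "left") for _ in range(n)]
    right = [(random.gauss(1.0 + offset, 0.5), "right") for _ in range(n)]
    return left + right

def nearest_centroid_accuracy(train, test):
    # Class centroids from training trials, then nearest-centroid labels.
    cent = {lab: mean(x for x, l in train if l == lab)
            for lab in ("left", "right")}
    hits = sum(1 for x, lab in test
               if min(cent, key=lambda c: abs(x - cent[c])) == lab)
    return hits / len(test)

train = make_trials(offset=0.0)
same = nearest_centroid_accuracy(train, make_trials(offset=0.0))
shift = nearest_centroid_accuracy(train, make_trials(offset=0.8))
print(f"matched condition: {same:.2f}, mismatched: {shift:.2f}")
```

Under this toy model, accuracy drops when the test condition differs from the training condition, which is the qualitative effect the abstract quantifies as a 16.34% gap.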